Jigsaw puzzle solving, the problem of constructing a coherent whole from a set of non-overlapping, unordered visual fragments, is fundamental to numerous applications, and yet most of the literature of the past two decades has focused on less realistic puzzles whose pieces are identical squares. Here we formalize a new type of jigsaw puzzle in which the pieces are general convex polygons, generated by cutting through a global polygonal shape/image with an arbitrary number of straight cuts, a generation model inspired by the celebrated lazy caterer's sequence. We analyze the theoretical properties of such puzzles, including the inherent challenge of solving them once the pieces are contaminated with geometric noise. To cope with these difficulties and obtain tractable solutions, we abstract the problem as a multi-body spring-mass dynamical system endowed with hierarchical loop constraints and a layered reconstruction process. We define evaluation metrics and present experimental results on both apictorial and pictorial puzzles to show that they can be solved fully automatically.
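The lazy caterer's sequence mentioned above counts the maximum number of pieces obtainable by cutting a convex region with n straight cuts. A minimal sketch of the standard closed form p(n) = (n² + n + 2) / 2 (a well-known identity, not code from the paper):

```python
def lazy_caterer(n: int) -> int:
    """Maximum number of pieces produced by n straight cuts
    through a convex region: p(n) = (n^2 + n + 2) / 2."""
    return (n * n + n + 2) // 2

# Each new cut can cross every previous cut, so cut k adds up to k new pieces.
print([lazy_caterer(n) for n in range(6)])  # [1, 2, 4, 7, 11, 16]
```

The `//` division is exact, since n² + n is always even.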
Understanding the 3D world from 2D images involves more than detection and segmentation of the objects within the scene. It also includes the interpretation of the structure and arrangement of the scene elements. Such understanding is often rooted in recognizing the physical world and its limitations, and in prior knowledge as to how similar typical scenes are arranged. In this research we pose a new challenge for neural network (or other) scene understanding algorithms - can they distinguish between plausible and implausible scenes? Plausibility can be defined both in terms of physical properties and in terms of functional and typical arrangements. Hence, we define plausibility as the probability of encountering a given scene in the real physical world. We build a dataset of synthetic images containing both plausible and implausible scenes, and test the success of various vision models in the task of recognizing and understanding plausibility.
Video synthesis methods rapidly improved in recent years, allowing easy creation of synthetic humans. This poses a problem, especially in the era of social media, as synthetic videos of speaking humans can be used to spread misinformation in a convincing manner. Thus, there is a pressing need for accurate and robust deepfake detection methods that can detect forgery techniques not seen during training. In this work, we explore whether this can be done by leveraging a multi-modal, out-of-domain backbone trained in a self-supervised manner, adapted to the video deepfake domain. We propose FakeOut, a novel approach that relies on multi-modal data throughout both the pre-training phase and the adaptation phase. We demonstrate the efficacy and robustness of FakeOut in detecting various types of deepfakes, especially manipulations which were not seen during training. Our method achieves state-of-the-art results in cross-manipulation and cross-dataset generalization. This study shows that, perhaps surprisingly, training on out-of-domain videos (i.e., videos with no speaking humans) can lead to better deepfake detection systems. Code is available on GitHub.
Generative models are becoming ever more powerful, being able to synthesize highly realistic images. We propose an algorithm for taming these models - changing the probability that the model will produce a specific image or image category. We consider generative models that are powered by normalizing flows, which allows us to reason about the exact likelihood of generating a given image. Our method is general purpose, and we exemplify it using models that generate human faces, a subdomain with many interesting privacy and bias considerations. Our method can be used in the context of privacy, e.g., removing a specific person from the output of a model, and also in the context of de-biasing by forcing a model to output specific image categories according to a given target distribution. Our method uses a fast fine-tuning process without retraining the model from scratch, achieving the goal in less than 1% of the time taken to initially train the generative model. We evaluate qualitatively and quantitatively, to examine the success of the taming process and output quality.
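The exact-likelihood property of normalizing flows that the taming approach relies on follows from the change-of-variables formula. A minimal sketch with a toy 1-D affine flow (the flow, its parameters `mu` and `sigma`, and the function names are illustrative assumptions, not the paper's model):

```python
import numpy as np

# A toy invertible flow z = f(x) = (x - mu) / sigma with a standard-normal base.
mu, sigma = 0.5, 2.0

def log_prob(x: float) -> float:
    """Exact log-likelihood via change of variables:
    log p(x) = log p_base(f(x)) + log |det df/dx|."""
    z = (x - mu) / sigma
    log_base = -0.5 * (z ** 2 + np.log(2 * np.pi))  # standard normal log-density
    log_det = -np.log(sigma)                        # df/dx = 1/sigma
    return log_base + log_det

print(log_prob(0.5))  # log-density of N(mu, sigma^2) at its mean
```

Because the likelihood is exact (not a variational bound), fine-tuning can directly raise or lower the probability assigned to a target image or category.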
Recent advances in deep learning techniques and applications have revolutionized artistic creation and manipulation in many domains (text, images, music); however, fonts have not yet been integrated with deep learning architectures in a manner that supports their multi-scale nature. In this work we aim to bridge this gap, proposing a network architecture capable of rasterizing glyphs in multiple sizes, potentially paving the way for easy and accessible creation and manipulation of fonts.
We study the oracle complexity of producing $(\delta,\epsilon)$-stationary points, in the sense proposed by Zhang et al. [2020]. While dimension-free randomized algorithms exist that produce such points within $\widetilde{O}(1/\delta\epsilon^{3})$ first-order oracle calls, we show that no dimension-free rate is achievable by a deterministic algorithm. On the other hand, we point out that this rate can be derandomized for smooth functions, with only a logarithmic dependence on the smoothness parameter. Moreover, we establish several lower bounds for this task that hold for any randomized algorithm, with or without convexity. Finally, we show how the convergence rate of finding $(\delta,\epsilon)$-stationary points can be improved in case the function is convex, a setting we motivate by proving that, in general, no finite-time algorithm can produce points at which even a convex function has small subgradients.
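For context, the notion of $(\delta,\epsilon)$-stationarity due to Zhang et al. [2020] asks for a short vector in the Goldstein $\delta$-subdifferential. A sketch of the standard definition (notation reconstructed from the literature, not quoted from this abstract):

```latex
% x is (\delta,\epsilon)-stationary for a Lipschitz function f iff
\min \left\{ \|g\| \;:\;
  g \in \operatorname{conv}\!\Big( \bigcup_{y \in B_\delta(x)} \partial f(y) \Big)
\right\} \;\le\; \epsilon ,
% i.e., some convex combination of subgradients taken within the
% \delta-ball around x has norm at most \epsilon.
```

Taking $\delta \to 0$ recovers the usual (and, for nonsmooth nonconvex $f$, intractable) requirement of a small subgradient at $x$ itself.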
A common sales strategy involves having account executives (AEs) proactively reach out and engage with potential customers. However, not all contact attempts have a positive effect: some attempts do not change the customer's decision, while others may even interfere with the desired outcome. In this work, we propose using causal inference to estimate the effect of contacting each potential customer and to set the contact policy accordingly. We demonstrate this approach on worthy.com, an online jewelry marketplace. We studied Worthy's business processes to identify the relevant decisions and outcomes, and formalized assumptions about how they are made. Using causal tools, we selected a decision point where improving AE contact activity appeared promising. We then developed a personalized policy recommending contact only with customers for whom it is beneficial. Finally, we validated the results in an A/B test over a 3-month period, which led to a 22% increase in the item delivery rate of the targeted population (p-value = 0.026). The policy is now in continuous use.
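The "contact only customers for whom it is beneficial" policy is, in essence, a treatment-effect threshold. A minimal sketch of the idea on simulated data (the single `segment` feature, the naive per-segment effect estimator, and all numbers are illustrative assumptions, not Worthy's actual model or data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical historical data: one binary lead feature, whether an AE
# contacted the lead, and whether the item was ultimately delivered.
segment   = rng.integers(0, 2, n)        # lead segment (illustrative feature)
contacted = rng.integers(0, 2, n)        # past (randomized) contact decision
# Simulated outcome: contact helps segment 1, slightly hurts segment 0.
p = 0.3 + 0.2 * contacted * segment - 0.05 * contacted * (1 - segment)
delivered = rng.random(n) < p

def uplift(seg: int) -> float:
    """Naive per-segment treatment effect: E[Y | T=1] - E[Y | T=0]."""
    treated = delivered[(segment == seg) & (contacted == 1)]
    control = delivered[(segment == seg) & (contacted == 0)]
    return treated.mean() - control.mean()

# Personalized policy: contact only segments with estimated positive uplift.
policy = {seg: bool(uplift(seg) > 0) for seg in (0, 1)}
print(policy)
```

This difference-in-means estimator is only unbiased when past contact was as-good-as-random within a segment; the paper's point is precisely that such identifying assumptions must be formalized from the business process before deriving a policy.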
Data science has the potential to improve business across a variety of verticals. While the lion's share of data science projects takes a predictive approach, such predictions must still be turned into decisions. This two-step approach, however, is not only suboptimal, it may even degrade performance and cause the project to fail. An alternative is to follow a prescriptive framework, in which actions are "first-class citizens", so that the model produces a policy that prescribes an action to take rather than a prediction of an outcome. In this paper, we explain why the prescriptive approach is important and provide a step-by-step methodology: the Prescriptive Canvas. The latter aims to improve the framing of the project and the communication among its stakeholders, including project and data science managers, toward a successful business impact.
Understanding to what extent neural networks memorize training data is an intriguing question with practical and theoretical implications. In this paper we show that, in some cases, a significant fraction of the training data can in fact be reconstructed from the parameters of a trained neural network classifier. We propose a novel reconstruction scheme that stems from recent theoretical results about the implicit bias of gradient-based methods in training neural networks. To the best of our knowledge, our results are the first to show that reconstructing a large portion of the actual training samples from a trained neural network classifier is feasible. This has negative implications for privacy, as the scheme can be used as an attack that reveals sensitive training data. We demonstrate our method for binary MLP classifiers on a few standard computer vision datasets.
We study norm-based uniform convergence bounds for neural networks, aiming at a tighter understanding of how these are affected by the architecture and the type of norm constraint, for the simple class of scalar-valued one-hidden-layer networks with inputs bounded in Euclidean norm. We first prove that, in general, controlling the spectral norm of the hidden-layer weight matrix is insufficient to obtain uniform convergence guarantees (independent of the network width), whereas a stronger Frobenius norm control is sufficient, extending and improving on previous work. Motivated by the proof constructions, we identify and analyze two important settings in which (perhaps surprisingly) a mere spectral norm control turns out to be sufficient: first, when the network's activation functions are sufficiently smooth (a result that extends to deeper networks); and second, for certain types of convolutional networks. In the latter setting, we study how the sample complexity is additionally affected by parameters such as the amount of overlap between patches and the total number of patches.
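The sense in which Frobenius norm control is "stronger" than spectral norm control can be made precise by the standard relation between the two matrix norms (a textbook inequality, not a result of the paper):

```latex
% For a weight matrix W of rank r,
\|W\|_2 \;\le\; \|W\|_F \;\le\; \sqrt{r}\,\|W\|_2 ,
% so any Frobenius-norm constraint also bounds the spectral norm,
% while a spectral-norm constraint leaves \|W\|_F free to grow with the width.
```

This width-dependent gap is exactly what allows spectral norm control alone to fail to give width-independent uniform convergence in the general case.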